The DCI index: Discounted cumulated impact-based research evaluation

Authors

  • Kalervo Järvelin
  • Olle Persson
Abstract

Research evaluation is increasingly popular and important among research funding bodies and science policy makers. Various indicators have been proposed to evaluate the standing of individual scientists, institutions, journals, or countries. A simple and popular indicator is the h-index, the Hirsch index (Hirsch 2005), which measures the lifetime achievement of a scholar. Several other indicators have been proposed to complement or balance the h-index. However, these indicators have no conception of aging. The AR-index (Jin et al. 2007) incorporates aging but divides the received citation counts by the raw age of the publication. Consequently, the decay of a publication's impact is very steep and insensitive to disciplinary differences. In addition, we believe that a publication becomes outdated only when it is no longer cited, not simply because of its age. Finally, all of these indicators treat citations as equally important, when one might reasonably think that a citation from a heavily cited publication should weigh more than a citation from a non-cited or little-cited publication. We propose a new indicator, the DCI index (Discounted Cumulated Impact), which devalues old citations in a smooth way. It rewards an author for receiving new citations even if the publication is old. Further, it allows the citations to be weighted by the citation weight of the citing publication. The DCI index can be used to calculate research performance on the basis of the h-core of a scholar or any other publication data set. Finally, it supports comparing research performance to the average performance in the domain, and across domains as well.
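
The idea sketched in the abstract can be illustrated with a small example. The sketch below assumes a DCG-style logarithmic discount applied to the age of each citation (not the age of the publication), with base b controlling how quickly old citations lose value; the function name, data layout, and default parameters are illustrative only, and the exact DCI formula and its parameterization are given in the paper.

    from math import log

    def dci(citation_years, b=2, weights=None, current_year=2008):
        """Illustrative Discounted Cumulated Impact score for one publication.

        citation_years: years in which the publication was cited.
        weights:        optional per-citation weights, e.g. the citation count
                        of the citing publication (defaults to 1.0 each).
        b:              base of the logarithmic discount; citations at most
                        b years old are not discounted at all.
        """
        if weights is None:
            weights = [1.0] * len(citation_years)
        score = 0.0
        for year, w in zip(citation_years, weights):
            age = max(current_year - year, 1)            # age of the citation itself
            discount = 1.0 if age <= b else log(age, b)  # smooth, DCG-style decay
            score += w / discount                        # recent citations count in full
        return score

    # Aggregating over a scholar's h-core (hypothetical data): each entry is
    # the list of citation years of one h-core publication.
    h_core = [[2001, 2003, 2007], [2005, 2006, 2007, 2008]]
    total_dci = sum(dci(years, b=2, current_year=2008) for years in h_core)

Because the discount depends on when a citation arrives rather than on when the publication appeared, an old publication that keeps attracting fresh citations keeps contributing fully to the score, which is the behaviour the abstract describes.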

Related articles

Measuring Impact of 12 Information Scientists Using the DCI index

The Discounted Cumulated Impact (DCI) index has recently been proposed for research evaluation. In the present work, an earlier data set by Cronin & Meho (2007) is reanalyzed, with the aim of exemplifying the salient features of the DCI index. We apply the index on, and compare our results to the outcomes of, the Cronin-Meho (2007) study. Both authors and their top publications are used as unit...

Discounted Cumulated Gain Based Evaluation of Multiple-Query IR Sessions

IR research has a strong tradition of laboratory evaluation of systems. Such research is based on test collections, pre-defined test topics, and standard evaluation metrics. While recent research has emphasized the user viewpoint by proposing user-based metrics and non-binary relevance assessments, the methods are insufficient for truly user-based evaluation. The common assumption of a single q...

Interactive Analysis and Exploration of Experimental Evaluation Results

This paper proposes a methodology based on discounted cumulated gain measures and visual analytics techniques in order to improve the analysis and understanding of IR experimental evaluation results. The proposed methodology is geared to favour a natural and effective interaction of the researchers and developers with the experimental data and it is demonstrated by developing an innovative appl...

Cumulated Relative Position: A Metric for Ranking Evaluation

Measuring is a key to scientific progress. This is particularly true for research concerning complex systems. Multilingual and multimedia information access systems, such as search engines, are increasingly complex: they need to satisfy diverse user needs and support challenging tasks. Their development calls for proper evaluation methodologies to ensure that they meet the expected user require...

Binary and graded relevance in IR evaluations--Comparison of the effects on ranking of IR systems

In this study the rankings of IR systems based on binary and graded relevance in TREC 7 and 8 data are compared. Relevance of a sample of TREC results is reassessed using a relevance scale with four levels: non-relevant, marginally relevant, fairly relevant, and highly relevant. Twenty-one topics and 90 systems from TREC 7 and 20 topics and 121 systems from TREC 8 form the data. Binary precision, and ...

Journal title:
  • JASIST

Volume 59, Issue -

Pages -

Publication date: 2008